Graph neural networks (GNNs) have received great attention due to their success in various graph-related learning tasks. Several GNN frameworks have since been developed for fast and easy implementation of GNN models. Despite their popularity, these frameworks are not well documented, and their implementations and system performance are not well understood. In particular, unlike traditional GNNs that are trained on the entire graph in a full-batch manner, recent GNNs have been developed with different graph sampling techniques for mini-batch training on large graphs. While sampling improves scalability, training times still depend on the frameworks' implementations, since sampling and its associated operations can introduce non-negligible overhead and computational cost. In addition, it is unknown how 'eco-friendly' the frameworks are from a green computing perspective. In this paper, we provide an in-depth study of two mainstream GNN frameworks along with three state-of-the-art GNNs to analyze their performance in terms of runtime and power/energy consumption. We conduct extensive benchmark experiments at several different levels and present detailed analysis results and observations, which could be helpful for further improvement and optimization.
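To make the sampling overhead concrete, below is a minimal sketch of uniform neighbor sampling for mini-batch GNN training. The adjacency-list format, fan-out values, and function names are illustrative assumptions, not the implementation of any particular framework; real frameworks repeat this per batch, which is exactly where the runtime and energy overhead studied here can arise.

```python
import random

def sample_neighbors(adj, seeds, fanouts):
    """Uniformly sample a fixed number of neighbors per hop for a seed batch.

    adj: dict mapping node id -> list of neighbor ids (illustrative format).
    seeds: list of target nodes in the current mini-batch.
    fanouts: neighbors to keep per hop, e.g. [10, 25] for a 2-layer GNN.
    Returns one sampled node frontier per hop (outermost hop last).
    """
    frontiers = []
    frontier = list(seeds)
    for fanout in fanouts:
        sampled = set(frontier)
        for node in frontier:
            neighbors = adj.get(node, [])
            k = min(fanout, len(neighbors))
            sampled.update(random.sample(neighbors, k))
        frontier = list(sampled)
        frontiers.append(frontier)
    return frontiers

# Toy usage: a 2-layer GNN aggregates over these sampled frontiers
# instead of the full graph, trading exactness for scalability.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(sample_neighbors(adj, seeds=[0], fanouts=[2, 2]))
```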
A new method for solving the wave equation is presented, called the learned Born series (LBS), which is derived from the convergent Born series but whose components are found through training. The LBS is shown to be significantly more accurate than the convergent Born series for the same number of iterations in the presence of high-contrast scatterers, while maintaining comparable computational complexity. The LBS is able to generate a reasonable prediction of the global pressure field with a small number of iterations, and the errors decrease with the number of learned iterations.
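For context, the fixed-point iteration the LBS builds on is, in one widely used formulation of the convergent Born series (following Osnabrugge et al., 2016; this notation is an assumption here, not part of the abstract):

```latex
u_{k+1} = M u_k + \gamma G s, \qquad M = \gamma G V - \gamma + 1,
```

where \(G = (\nabla^2 + k_0^2 + i\epsilon)^{-1}\) is the Green's operator of a homogeneous background medium, \(V = k^2(\mathbf{r}) - k_0^2 - i\epsilon\) is the scattering potential, \(s\) is the source, and \(\gamma = \tfrac{i}{\epsilon} V\) is the preconditioner that guarantees convergence. The LBS keeps this iterative structure but learns its components from data.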
Actively monitoring machine learning models during production operations helps ensure prediction quality and enables the detection and remediation of unexpected or undesired conditions. Monitoring models already deployed in big data environments brings the additional challenges of adding monitoring in parallel to the existing modelling workflow and of controlling resource requirements. In this paper, we describe (1) a framework for monitoring machine learning models and (2) its implementation for a big data supply chain application. We use our implementation to study drift in model features, predictions, and performance on three real data sets. We compare hypothesis-test and information-theoretic approaches to drift detection in features and predictions, using the Kolmogorov-Smirnov distance and the Bhattacharyya coefficient. Results showed that model performance was stable over the evaluation period. Features and predictions showed statistically significant drifts; however, these drifts were not linked to changes in model performance during the time of our study.
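To illustrate the two drift measures compared above, the sketch below computes the Kolmogorov-Smirnov distance and a histogram-based estimate of the Bhattacharyya coefficient between a reference window and a production window of a single feature. The windowing and bin count are assumptions for the example, not the paper's implementation.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_distance(reference, current):
    """KS distance: max gap between the two empirical CDFs (0 = identical)."""
    statistic, _ = ks_2samp(reference, current)
    return statistic

def bhattacharyya_coefficient(reference, current, bins=50):
    """Overlap between two distributions (1 = identical, 0 = disjoint),
    estimated from histograms that share the same bin edges."""
    edges = np.histogram_bin_edges(np.concatenate([reference, current]), bins=bins)
    p, _ = np.histogram(reference, bins=edges)
    q, _ = np.histogram(current, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

# Toy usage: a shifted production window shows up as a larger KS distance
# and a smaller Bhattacharyya coefficient than an in-distribution window.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, 10_000)
cur = rng.normal(0.5, 1.0, 10_000)
print(ks_distance(ref, cur), bhattacharyya_coefficient(ref, cur))
```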
Although prediction models for delirium, a condition that commonly occurs during general hospitalization or post-surgery, have not gained widespread adoption, evaluating their algorithmic bias is crucial given the established association between social determinants of health and delirium risk. In this context, using MIMIC-III and another academic hospital dataset, we present initial experimental evidence showing how sociodemographic features such as sex and race can impact model performance across subgroups. With this work, our intent is to initiate a discussion about the intersectional effects of old age, race, and socioeconomic factors on the early-stage detection and prevention of delirium using ML.
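As a sketch of the kind of subgroup analysis described above, the snippet below computes a discrimination metric (AUROC) separately per sociodemographic subgroup; the column names and metric choice are hypothetical, not the paper's protocol.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auroc(df, group_col, label_col="delirium", score_col="risk_score"):
    """Report AUROC per subgroup; large gaps flag potential algorithmic bias.

    df: one row per patient with a true label, a model risk score, and
    sociodemographic attributes (column names here are hypothetical).
    """
    results = {}
    for group, subset in df.groupby(group_col):
        if subset[label_col].nunique() < 2:
            results[group] = float("nan")  # AUROC undefined with one class
        else:
            results[group] = roc_auc_score(subset[label_col], subset[score_col])
    return pd.Series(results, name=f"AUROC by {group_col}")

# Example: compare performance across hypothetical 'sex' and 'race' columns.
# print(subgroup_auroc(predictions_df, "sex"))
# print(subgroup_auroc(predictions_df, "race"))
```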
Proteins play a central role in biology, from immune recognition to brain activity. While major advances in machine learning have improved our ability to predict protein structure from sequence, determining protein function from structure remains a major challenge. Here, we introduce the holographic convolutional neural network (H-CNN) for proteins, a physically motivated machine learning approach to modeling amino acid preferences in protein structures. H-CNN reflects physical interactions in a protein structure and recapitulates the functional information stored in evolutionary data. H-CNN accurately predicts the impact of mutations on protein function, including stability and binding of protein complexes. Our interpretable computational model for protein structure-function maps could guide the design of novel proteins with desired function.
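One generic way a model of per-site amino acid preferences can be turned into a mutation-effect score (a common recipe, not necessarily H-CNN's exact protocol) is the log-odds between mutant and wild-type probabilities:

```python
import math

def mutation_effect_score(site_probs, wild_type, mutant):
    """Log-odds score for a substitution given per-site amino acid preferences.

    site_probs: dict mapping one-letter amino acid codes to the model's
    predicted probability at a structural site (hypothetical interface).
    Positive scores mean the model prefers the mutant over the wild type.
    """
    return math.log(site_probs[mutant] / site_probs[wild_type])

# Toy usage with made-up probabilities for a single site:
probs = {"A": 0.30, "G": 0.05, "L": 0.40, "V": 0.25}
print(mutation_effect_score(probs, wild_type="L", mutant="G"))  # negative: disfavored
```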
Kitting refers to preparing and grouping the necessary parts and tools (a "kit") for assembly in a manufacturing environment. Automating this process streamlines assembly tasks for human workers and improves efficiency. Existing automated kitting systems follow scripted instructions and predefined heuristics. However, given variability in part availability and logistics delays, the rigidity of existing systems can limit the overall efficiency of an assembly line. In this paper, we propose a bi-level optimization framework that enables a robot to perform task-segmentation-based part selection, kit arrangement, and delivery scheduling, so as to provide customized kits just in time, i.e., exactly when they are needed. We evaluate the proposed approach through a human subjects study (N = 18) involving the assembly of a flat-pack furniture table and a shop-flow simulation built from the study data. Our results show that the just-in-time kitting system is more efficient, more resilient to upstream shop-flow delays, and better preferred by users, compared with a baseline method that uses rigid task segmentation boundaries defined by the task graph itself and delivers single kits containing all the parts required to assemble a single unit.
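To make the just-in-time objective concrete, here is a toy greedy sketch of the delivery-scheduling layer: among task segments whose kits are currently feasible given part stock, deliver the one with the earliest deadline. The data structures and greedy rule are hypothetical illustrations, not the paper's bi-level optimization.

```python
from dataclasses import dataclass

@dataclass
class TaskSegment:
    name: str
    parts: dict          # part id -> quantity needed for this segment's kit
    deadline: float      # time by which the kit must arrive at the station

def next_kit(segments, stock, now):
    """Greedy just-in-time rule: among segments whose parts are all in stock,
    deliver the one with the earliest deadline still ahead of `now`."""
    feasible = [
        s for s in segments
        if s.deadline >= now
        and all(stock.get(p, 0) >= q for p, q in s.parts.items())
    ]
    if not feasible:
        return None  # wait for upstream parts rather than deliver a partial kit
    chosen = min(feasible, key=lambda s: s.deadline)
    for p, q in chosen.parts.items():
        stock[p] -= q  # reserve the parts for this kit
    return chosen

# Toy usage: the tabletop kit is dispatched first while leg parts are delayed.
segments = [
    TaskSegment("tabletop", {"panel": 1, "screw": 8}, deadline=10.0),
    TaskSegment("legs", {"leg": 4, "screw": 16}, deadline=20.0),
]
stock = {"panel": 1, "screw": 30, "leg": 2}  # legs delayed upstream
print(next_kit(segments, stock, now=0.0))
```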
Ventricular tachycardia (VT) may be one cause of cardiac death, which affects 4.25 million people worldwide. A curative treatment is catheter ablation, which inactivates the abnormally triggering regions. To facilitate and accelerate localization during the ablation procedure, we present two novel localization techniques based on convolutional neural networks (CNNs). In contrast to existing methods, e.g., those using ECG imaging, our approaches are designed to be independent of patient-specific geometries, to be directly applicable to surface ECG signals, and to additionally provide a binary transmural position. One method outputs ranked alternative solutions. Results can be visualized on either a generic or a patient-specific geometry. The CNNs were trained on a dataset containing only simulated data and were evaluated on both simulated and clinical test data. On simulated data, the median test error was below 3 mm. The median localization error on clinical data was as low as 32 mm. Transmural positions were detected correctly in up to 82% of all clinical cases. Using the ranked alternative solutions, the top-3 median error on clinical data dropped to 20 mm. These results demonstrate a proof of principle for using CNNs to localize activation sources without the intrinsic need for patient-specific geometrical information. Furthermore, providing multiple solutions can help physicians find the actual activation source among several possible locations. With further optimization, these methods have high potential to speed up clinical interventions, consequently reducing procedural risk and improving outcomes for VT patients.
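The following minimal PyTorch sketch shows the general shape of such a model: a 1D CNN over the 12-lead surface ECG with two heads, one regressing activation-origin coordinates and one classifying the binary transmural position. Layer sizes and the two-head design are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ECGLocalizer(nn.Module):
    """Toy 1D CNN mapping a 12-lead ECG window to an activation-source
    estimate: (x, y, z) coordinates plus a binary transmural logit."""

    def __init__(self, leads=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(leads, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # length-independent pooling
        )
        self.coords = nn.Linear(64, 3)        # regression head: source location
        self.transmural = nn.Linear(64, 1)    # classification head: endo vs. epi

    def forward(self, ecg):                   # ecg: (batch, leads, samples)
        h = self.features(ecg).squeeze(-1)
        return self.coords(h), self.transmural(h)

model = ECGLocalizer()
xyz, transmural_logit = model(torch.randn(4, 12, 500))
print(xyz.shape, transmural_logit.shape)  # (4, 3) and (4, 1)
```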
CBCT in image-guided radiotherapy provides crucial anatomical information for patient setup and plan evaluation. Longitudinal CBCT image registration can quantify inter-fraction anatomical changes. The purpose of this study is to propose an unsupervised deep-learning-based method for CBCT-to-CBCT deformable image registration. The proposed deformable registration workflow consists of training and inference stages that share the same feed-forward path through a spatial transformation-based network (STN). The STN consists of a global generative adversarial network (GlobalGAN) and a local GAN (LocalGAN), which predict coarse- and fine-scale motions, respectively. The networks were trained by minimizing an image similarity loss and a deformation vector field (DVF) regularization loss, without the supervision of ground-truth DVFs. In the inference stage, patches of the local DVF are predicted by the trained LocalGAN and fused to form a whole-image DVF. This local whole-image DVF is subsequently combined with the GlobalGAN-generated DVF to obtain the final DVF. In experiments, the proposed method was evaluated using 100 fractional CBCTs from 20 abdominal cancer patients, plus 105 fractional CBCTs from a hold-out cohort of 21 different abdominal cancer patients. Qualitatively, the registration results show good alignment between the deformed CBCT images and the target CBCT images. Quantitatively, the mean target registration error (TRE) computed on fiducial markers and manually identified landmarks was 1.91 ± 1.11 mm. The mean absolute error (MAE) and normalized cross-correlation (NCC) between the deformed CBCT and the target CBCT were 33.42 ± 7.48 HU and 0.94 ± 0.04, respectively. This promising registration method could provide fast and accurate longitudinal CBCT alignment to facilitate inter-fraction anatomical change analysis and prediction.
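The unsupervised objective described above has the standard deformable-registration form; a hedged sketch (the abstract does not specify the exact similarity and regularization terms, so NCC and a gradient penalty are assumptions) is:

```latex
\mathcal{L}(\phi) \;=\; \underbrace{-\,\mathrm{NCC}\bigl(I_t,\; I_m \circ \phi\bigr)}_{\text{image similarity}}
\;+\; \lambda \underbrace{\sum_{\mathbf{x}} \lVert \nabla \phi(\mathbf{x}) \rVert^2}_{\text{DVF regularization}},
```

where \(I_m\) is the moving CBCT, \(I_t\) the target CBCT, \(\phi\) the predicted DVF applied through the spatial transformation network, and \(\lambda\) balances alignment against smoothness. No ground-truth DVF appears in the loss, which is what makes the training unsupervised.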
We introduce the notion of a Patch Sampling Schedule (PSS), which varies the number of Vision Transformer (ViT) patches used per batch during training. Since not all patches are equally important for most vision objectives (e.g., classification), we argue that less important patches can be used in fewer training iterations, leading to shorter training time with minimal impact on performance. Additionally, we observe that training with a PSS makes a ViT more robust to a wider range of patch sampling rates during inference. This allows for a fine-grained, dynamic trade-off between throughput and accuracy during inference. We evaluate ViTs with PSS on ImageNet, both trained from scratch and pretrained using a reconstruction loss function. For the pretrained model, we achieve a 0.26% reduction in classification accuracy in exchange for a reduction in training time from 25 hours to 17 hours, compared to using all patches in every iteration. Code, model checkpoints, and logs are available at https://github.com/bradmcdanel/pss.
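A minimal sketch of the idea follows; the linear schedule shape and keep-rates are illustrative assumptions (see the repository above for the actual implementation).

```python
import torch

def patches_to_keep(epoch, total_epochs, num_patches, min_keep=0.5):
    """Toy linear patch sampling schedule: start by keeping only a fraction
    of patches and ramp up to all patches by the end of training."""
    keep_frac = min_keep + (1.0 - min_keep) * epoch / max(1, total_epochs - 1)
    return max(1, int(keep_frac * num_patches))

def sample_patches(tokens, num_keep):
    """Keep a random subset of patch tokens for this batch.
    tokens: (batch, num_patches, dim) ViT patch embeddings (CLS excluded)."""
    _, num_patches, _ = tokens.shape
    idx = torch.randperm(num_patches)[:num_keep]  # same subset across the batch here
    return tokens[:, idx, :]

# Toy usage inside a training loop: fewer tokens early -> cheaper iterations.
tokens = torch.randn(8, 196, 768)  # 14x14 patches for a 224x224 image
for epoch in range(3):
    k = patches_to_keep(epoch, total_epochs=3, num_patches=196)
    print(epoch, k, sample_patches(tokens, k).shape)
```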
We implemented two distinct three-dimensional deep learning neural networks and evaluated their ability to segment intracranial hemorrhage (ICH) seen on non-contrast computed tomography (CT). One model, referred to as "Voxels-Intersecting along Orthogonal Levels of Attention U-Net" (Viola-Unet), has architectural elements adapted to the INSTANCE 2022 data challenge. The second, comparison model was derived from the no-new U-Net (nnU-Net). Input images and ground-truth segmentation maps were used to train the two networks separately in a supervised manner. Validation data were subsequently used for semi-supervised training. Model predictions were compared during 5-fold cross-validation. Viola-Unet outperformed the comparison network on two of the four performance metrics (i.e., NSD and RVD). An ensemble model combining the Viola-Unet and nnU-Net networks had the highest performance in terms of DSC and HD. We demonstrate that the ICH segmentation performance advantage over 3D U-Net is associated with effectively incorporating spatially orthogonal features during the decoding branch of the U-Net. The code base, pretrained weights, and a Docker image of the Viola-Unet AI tool will be publicly available at https://github.com/samleoqh/viola-unet.
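The best-performing configuration above is a model ensemble; a minimal sketch of probability-level averaging (the model interfaces are hypothetical stand-ins for the Viola-Unet and nnU-Net predictors) is:

```python
import numpy as np

def ensemble_ich_mask(prob_maps, threshold=0.5):
    """Average per-voxel ICH probabilities from several models, then threshold.

    prob_maps: list of arrays of shape (D, H, W) with values in [0, 1],
    e.g. sigmoid/softmax outputs from Viola-Unet and nnU-Net on the same CT.
    Returns a binary segmentation mask.
    """
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return (mean_prob >= threshold).astype(np.uint8)

# Toy usage with random stand-ins for the two models' probability maps:
viola_prob = np.random.rand(32, 64, 64)
nnunet_prob = np.random.rand(32, 64, 64)
mask = ensemble_ich_mask([viola_prob, nnunet_prob])
print(mask.shape, mask.dtype, mask.mean())
```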